Learning Visual Features to Predict Hand Orientations

Author

  • Justus H. Piater
Abstract

This paper is a preliminary account of current work on a visual system that learns to aid in robotic grasping and manipulation tasks. Localized features of the visual scene are learned that correlate reliably with the orientation of a dextrous robotic hand during haptically guided grasps. On the basis of these features, hand orientations are recommended for future grasping operations. The learning process is instance-based, on-line, and incremental, and the interaction between the visual and haptic systems is loosely anthropomorphic. It is conjectured that critical spatial information can be learned on the basis of features of visual appearance, without explicit geometric representations or planning.
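The instance-based, on-line learning described in the abstract can be illustrated with a minimal nearest-neighbour sketch: each haptically guided grasp stores a (visual feature vector, hand orientation) pair, and future queries recommend the orientation of the most similar stored instance. The class and method names, the feature vectors, and the Euclidean distance metric below are all illustrative assumptions, not the paper's actual representation.

```python
import math

class OrientationMemory:
    """Hypothetical instance-based store: each observed grasp contributes a
    (feature vector, hand orientation) pair; queries return the orientation
    of the nearest stored feature vector."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, orientation_deg)

    def observe(self, features, orientation_deg):
        # On-line, incremental learning: simply remember the new instance.
        self.instances.append((list(features), float(orientation_deg)))

    def recommend(self, features):
        # Recommend the orientation paired with the closest stored feature
        # vector (Euclidean distance); None if nothing has been learned yet.
        if not self.instances:
            return None
        return min(self.instances,
                   key=lambda inst: math.dist(inst[0], features))[1]

memory = OrientationMemory()
memory.observe([0.2, 0.8], 30.0)   # grasp 1: feature vector -> hand at 30 deg
memory.observe([0.9, 0.1], 120.0)  # grasp 2: hand at 120 deg
print(memory.recommend([0.85, 0.15]))  # nearest stored instance is grasp 2
```

A real system would use learned localized appearance features rather than raw two-element vectors, but the incremental store-and-query loop is the essence of instance-based learning.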

Similar articles

Decoding the future from past experience: learning shapes predictions in early visual cortex

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our...


Using Visual Features to Predict Successful Grasp Parameters

Visual features play an important role in hand pre-shaping during human grasping. This paper focuses on using visual features in an image to predict successful grasp types, which can be used in robot grasp manipulation. The following questions are discussed: First, how to recognize different shapes in a given image. Second, how to train the system using image-grasp pairs. Third, evaluate th...


Hand Gesture Recognition with Batch and Reinforcement Learning

In this paper, we present a system for real-time recognition of user-defined static hand gestures captured via a traditional web camera. We use SURF descriptors to get the bag-of-visual-words features of the user's hand, and use these features to train a multi-class supervised learning model. We choose the best learning model from among SVM, Neural Networks, Decision Trees, and Random Forests, and t...


Feature reliability determines specificity and transfer of perceptual learning in orientation search

Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained ...


Learning visual saliency using topographic independent component analysis

Understanding the underlying mechanisms that drive human visual attention is a topic of immense interest. Most of the work is focused on extracting manually selected features that might resemble the human visual processing pathway and using a combination of those features to train a classifier that learns to predict where humans look. In contrast, we will learn the features in an unsupervised w...




Journal:

Volume   Issue 

Pages  -

Published: 2002